1.
Neuroscience; 543: 101-107, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38432549

ABSTRACT

In natural viewing conditions, the brain can optimally integrate retinal and extraretinal signals to maintain stable visual perception. These mechanisms, however, may fail in circumstances where extraction of a motion signal is less viable, such as impoverished visual scenes. This can result in a phenomenon known as autokinesis, in which one may experience apparent motion of a small visual stimulus in an otherwise completely dark environment. In this study, we examined the effect of autokinesis on the visual perception of motion in human observers. We used a novel optical-tracking method in which observers manually reported the perceived visual motion. Experimental results show that at lower speeds of motion, the perceived direction of motion was more aligned with the effect of autokinesis, whereas in the light or at higher speeds in the dark, it was more aligned with the actual direction of motion. These findings have important implications for understanding how the stability of visual representation in the brain can affect accurate perception of motion signals.


Subjects
Motion Perception, Humans, Visual Perception, Ocular Vision, Psychomotor Performance, Retina
2.
IEEE Int Conf Robot Autom; 2023: 4724-4731, 2023.
Article in English | MEDLINE | ID: mdl-38125032

ABSTRACT

In the last decade, various robotic platforms have been introduced that could support delicate retinal surgeries. Concurrently, to provide semantic understanding of the surgical area, recent advances have enabled microscope-integrated intraoperative Optical Coherence Tomography (iOCT) with high-resolution 3D imaging at near video rate. The combination of robotics and semantic understanding enables task autonomy in robotic retinal surgery, such as for subretinal injection. This procedure requires precise needle insertion for the best treatment outcomes. However, merging robotic systems with iOCT introduces new challenges. These include, but are not limited to, high demands on data processing rates and dynamic registration of these systems during the procedure. In this work, we propose a framework for autonomous robotic navigation for subretinal injection, based on intelligent real-time processing of iOCT volumes. Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target. We also introduce intelligent virtual B-scans, a volume slicing approach for rapid instrument pose estimation, enabled by Convolutional Neural Networks (CNNs). Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method. Finally, we discuss the challenges identified in this work and suggest potential solutions to further the development of such systems.
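The virtual B-scan idea, slicing the live iOCT volume along planes chosen to expose the instrument, can be illustrated with a minimal resampling sketch. This is not the authors' implementation: the geometry conventions, the `virtual_bscan` helper, and the synthetic volume are all assumptions, and the CNN that consumes the resulting slices is omitted.

```python
# Hypothetical sketch: extract a "virtual B-scan" from an iOCT volume along an
# arbitrary plane, e.g. one aligned with a coarse instrument-axis estimate.
import numpy as np
from scipy.ndimage import map_coordinates

def virtual_bscan(volume, origin, u_dir, v_dir, width, height, spacing=1.0):
    """Resample `volume` (z, y, x order) on the plane spanned by u_dir/v_dir."""
    u = np.asarray(u_dir, float); u /= np.linalg.norm(u)
    v = np.asarray(v_dir, float); v /= np.linalg.norm(v)
    us = (np.arange(width) - width / 2.0) * spacing
    vs = (np.arange(height) - height / 2.0) * spacing
    uu, vv = np.meshgrid(us, vs)                          # plane coordinates
    pts = origin + uu[..., None] * u + vv[..., None] * v  # (H, W, 3), zyx voxels
    coords = np.moveaxis(pts, -1, 0)                      # (3, H, W) for scipy
    return map_coordinates(volume, coords, order=1, mode="nearest")

# Smoke test: slice a synthetic volume along a tilted plane.
vol = np.random.rand(128, 256, 256).astype(np.float32)
bscan = virtual_bscan(vol, origin=np.array([64.0, 128.0, 128.0]),
                      u_dir=[0.0, 0.3, 1.0], v_dir=[1.0, 0.0, 0.0],
                      width=256, height=128)
```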

3.
IEEE Trans Vis Comput Graph; 29(11): 4503-4513, 2023 11.
Article in English | MEDLINE | ID: mdl-37788205

ABSTRACT

Human cognition relies on embodiment as a fundamental mechanism. Virtual avatars allow users to experience the adaptation, control, and perceptual illusion of alternative bodies. Although virtual bodies have medical applications in motor rehabilitation and therapeutic interventions, their potential for learning anatomy and medical communication remains underexplored. For learners and patients, anatomy, procedures, and medical imaging can be abstract and difficult to grasp. Experiencing anatomies, injuries, and treatments virtually through one's own body could be a valuable tool for fostering understanding. This work investigates the impact of avatars displaying anatomy and injuries suitable for such medical simulations. We ran a user study utilizing a skeleton avatar and virtual injuries, compared against a healthy human avatar as a baseline. We evaluated the influence on embodiment, well-being, and presence with self-report questionnaires, as well as motor performance via an arm movement task. Our results show that while both anatomical representation and injuries increase feelings of eeriness, there are no negative effects on embodiment, well-being, presence, or motor performance. These findings suggest that virtual representations of anatomy and injuries are suitable for medical visualizations targeting learning or communication without significantly affecting users' mental state or physical control within the simulation.


Subjects
Computer Graphics, Illusions, Humans, Emotions, Computer Simulation, Communication
4.
Article in English | MEDLINE | ID: mdl-37555199

ABSTRACT

Robotic X-ray C-arm imaging systems can precisely achieve any position and orientation relative to the patient. Informing the system, however, of exactly what pose corresponds to a desired view is challenging. Currently, these systems are operated by the surgeon using joysticks, but this interaction paradigm is not necessarily effective because users may be unable to efficiently actuate more than a single axis of the system simultaneously. Moreover, novel robotic imaging systems, such as the Brainlab Loop-X, allow for independent source and detector movements, adding even more complexity. To address this challenge, we consider complementary interfaces for the surgeon to command robotic X-ray systems effectively. Specifically, we consider three interaction paradigms: (1) the use of a pointer to specify the principal ray of the desired view relative to the anatomy, (2) the same pointer, but combined with a mixed reality environment to synchronously render digitally reconstructed radiographs from the tool's pose, and (3) the same mixed reality environment but with a virtual X-ray source instead of the pointer. Initial human-in-the-loop evaluation with an attending trauma surgeon indicates that mixed reality interfaces for robotic X-ray system control are promising and may contribute to substantially reducing the number of X-ray images acquired solely during "fluoro hunting" for the desired view or standard plane.
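As a rough illustration of paradigm (1), a tracked pointer's tip and axis can be converted directly into source and detector positions whose connecting line is the desired principal ray. This is a hedged sketch, not the study's software: `view_from_pointer`, the frame conventions, and the source-to-detector and source-to-anatomy distances are assumptions.

```python
# Hypothetical sketch: turn a tracked pointer pose into source/detector
# positions for a robotic C-arm, with the pointer axis as the principal ray.
import numpy as np

def view_from_pointer(tip, direction, sid=1000.0, sad=600.0):
    """tip/direction: pointer tip and axis in room coordinates (mm).
    sid: source-to-detector distance; sad: source-to-anatomy distance."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    source = tip - sad * d        # place the source "behind" the pointed anatomy
    detector = source + sid * d   # detector further along the principal ray
    return source, detector

source, detector = view_from_pointer(np.array([100.0, 50.0, 1200.0]),
                                     np.array([0.0, -0.2, -1.0]))
```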

5.
Article in English | MEDLINE | ID: mdl-37555198

ABSTRACT

Magnetic Resonance Imaging (MRI) is a medical imaging modality that allows for the evaluation of soft-tissue diseases and the assessment of bone quality. Preoperative MRI volumes are used by surgeons to identify bone defects, perform the segmentation of lesions, and generate surgical plans before the surgery. Nevertheless, conventional intraoperative imaging modalities such as fluoroscopy are less sensitive in detecting potential lesions. In this work, we propose a 2D/3D registration pipeline that aims to register preoperative MRI with intraoperative 2D fluoroscopic images. To showcase the feasibility of our approach, we use the core decompression procedure as a surgical example to perform 2D/3D femur registration. The proposed registration pipeline is evaluated using digitally reconstructed radiographs (DRRs) to simulate the intraoperative fluoroscopic images. The resulting transformation from the registration is later used to create overlays of preoperative MRI annotations and planning data to provide intraoperative visual guidance to surgeons. Our results suggest that the proposed registration pipeline is capable of achieving reasonable transformation between MRI and digitally reconstructed fluoroscopic images for intraoperative visualization applications.
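The pipeline itself is not published as code here, but the classic intensity-based 2D/3D loop it builds on can be sketched: render a DRR for a candidate pose and maximize its similarity to the fluoroscopic image. The toy below is a strong simplification, assuming a parallel projection and optimizing only three rotation angles; `drr`, `ncc`, and `register` are illustrative names, not the paper's API.

```python
# Toy intensity-based 2D/3D registration: parallel-projection DRRs plus
# normalized cross-correlation, optimized over three Euler angles only.
import numpy as np
from scipy.ndimage import rotate
from scipy.optimize import minimize

def drr(volume, angles):
    """Rotate the volume by Euler angles (deg) and integrate along axis 0."""
    v = rotate(volume, angles[0], axes=(1, 2), reshape=False, order=1)
    v = rotate(v, angles[1], axes=(0, 2), reshape=False, order=1)
    v = rotate(v, angles[2], axes=(0, 1), reshape=False, order=1)
    return v.sum(axis=0)

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return (a * b).mean()

def register(volume, fluoro, x0=(0.0, 0.0, 0.0)):
    cost = lambda x: -ncc(drr(volume, x), fluoro)
    return minimize(cost, x0, method="Powell").x

# Smoke test on a smooth synthetic "bone" blob.
z, y, x = np.mgrid[:32, :32, :32]
vol = np.exp(-((x - 20) ** 2 + (y - 12) ** 2 + 2 * (z - 16) ** 2) / 60.0)
target = drr(vol, (5.0, -3.0, 2.0))   # simulated fluoroscopic image
angles = register(vol, target)
```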

6.
Article in English | MEDLINE | ID: mdl-37021885

ABSTRACT

The use of Augmented Reality (AR) for navigation purposes has proven beneficial in assisting physicians during the performance of surgical procedures. These applications commonly require knowing the pose of surgical tools and patients to provide visual information that surgeons can use during the performance of the task. Existing medical-grade tracking systems use infrared cameras placed inside the Operating Room (OR) to identify retro-reflective markers attached to objects of interest and compute their pose. Some commercially available AR Head-Mounted Displays (HMDs) use similar cameras for self-localization, hand tracking, and estimating the objects' depth. This work presents a framework that uses the built-in cameras of AR HMDs to enable accurate tracking of retro-reflective markers without the need to integrate any additional electronics into the HMD. The proposed framework can simultaneously track multiple tools without prior knowledge of their geometry and only requires establishing a local network between the headset and a workstation. Our results show that the tracking and detection of the markers can be achieved with an accuracy of 0.09 ± 0.06 mm on lateral translation, 0.42 ± 0.32 mm on longitudinal translation, and 0.80 ± 0.39° for rotations around the vertical axis. Furthermore, to showcase the relevance of the proposed framework, we evaluate the system's performance in the context of surgical procedures. This use case was designed to replicate the scenarios of k-wire insertions in orthopedic procedures. For evaluation, seven surgeons were provided with visual navigation and asked to perform 24 injections using the proposed framework. A second study with ten participants served to investigate the capabilities of the framework in the context of more general scenarios. Results from these studies provided comparable accuracy to those reported in the literature for AR-based navigation procedures.
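The final step of such marker-based tracking, recovering a rigid pose from detected marker centers, is commonly solved with the Kabsch/Procrustes method; a minimal sketch follows. It assumes marker detection and triangulation from the HMD's cameras have already produced the 3D points, and `rigid_pose` is an illustrative name, not the framework's API.

```python
# Kabsch/Procrustes fit of a rigid pose from matched 3D marker positions.
import numpy as np

def rigid_pose(model, detected):
    """Return R, t minimizing sum ||R @ model_i + t - detected_i||^2."""
    cm, cd = model.mean(axis=0), detected.mean(axis=0)
    H = (model - cm).T @ (detected - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cm

# Smoke test: a four-marker constellation under a known rotation + translation.
model = np.array([[0, 0, 0], [50, 0, 0], [0, 80, 0], [0, 0, 30]], float)
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
detected = model @ Rz.T + np.array([10.0, -5.0, 120.0])
R, t = rigid_pose(model, detected)   # recovers Rz and the translation
```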

7.
Article in English | MEDLINE | ID: mdl-38487569

ABSTRACT

The integration of navigation capabilities into the operating room has enabled surgeons to take on more precise procedures guided by a pre-operative plan. Traditionally, navigation information based on this plan is presented on monitors in the surgical theater, but these monitors force the surgeon to frequently look away from the surgical area. Alternative technologies, such as augmented reality, have enabled surgeons to visualize navigation information in situ. However, burdening the visual field with additional information can be distracting. In this work, we propose integrating haptic feedback into a surgical tool handle to enable surgical guidance capabilities. This reduces the amount of visual information, freeing surgeons to maintain visual attention on the patient and the surgical site. To investigate the feasibility of this guidance paradigm, we conducted a pilot study with six subjects. Participants traced paths, pinpointed locations, and matched alignments with a mock surgical tool featuring a novel haptic handle. We collected quantitative data, tracking users' accuracy and time to completion, as well as subjective cognitive load. Our results show that haptic feedback can guide participants using a tool to sub-millimeter and sub-degree accuracy with little training. Participants were able to match a location with an average error of 0.82 mm, desired pivot alignments with an average error of 0.83°, and desired rotations with an average error of 0.46°.
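One plausible way to realize such guidance, sketched below under stated assumptions, is to map the tool tip's perpendicular deviation from a planned trajectory onto per-actuator vibration intensities. The actuator layout, gain, and saturation distance are invented for illustration and do not describe the paper's handle design.

```python
# Hypothetical guidance mapping: deviation from a planned path -> vibration
# levels for actuators arranged around the tool handle.
import numpy as np

def deviation_from_path(tip, p0, p1):
    """Perpendicular offset vector of `tip` from the line p0 -> p1."""
    d = (p1 - p0) / np.linalg.norm(p1 - p0)
    off = (tip - p0) - np.dot(tip - p0, d) * d
    return off  # points from the path toward the tip

def actuator_intensities(offset, actuator_dirs, gain=0.5, max_mm=4.0):
    """Drive actuators opposite the offset so vibration 'pushes' back on course."""
    norm = np.linalg.norm(offset)
    if norm < 1e-9:
        return np.zeros(len(actuator_dirs))
    mag = min(norm / max_mm, 1.0)                # saturate at max_mm deviation
    u = -offset / norm                           # correction direction
    return gain * mag * np.clip(actuator_dirs @ u, 0.0, None)

dirs = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], float)
off = deviation_from_path(np.array([1.5, 0.2, 10.0]),
                          np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 50.0]))
levels = actuator_intensities(off, dirs)
```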

8.
Article in English | MEDLINE | ID: mdl-38179232

ABSTRACT

Osteonecrosis of the Femoral Head (ONFH) is a progressive disease characterized by the death of bone cells due to the loss of blood supply. Early detection and treatment of this disease are vital to avoiding Total Hip Replacement. Although early stages of ONFH can be diagnosed using Magnetic Resonance Imaging (MRI), commonly used intra-operative imaging modalities such as fluoroscopy frequently fail to depict the lesion, increasing the difficulty of intra-operative localization of osteonecrosis. This work introduces a novel framework that enables the localization of necrotic lesions in Computed Tomography (CT) as a step toward localizing and visualizing necrotic lesions in intra-operative images. The proposed framework uses Deep Learning algorithms to enable automatic segmentation of the femur, pelvis, and necrotic lesions in MRI. An additional step performs semi-automatic segmentation of these anatomies, excluding the necrotic lesions, in CT. A final step performs pairwise registration of the corresponding anatomies, allowing for the localization and visualization of the necrosis in CT. To investigate the feasibility of integrating the proposed framework into the surgical workflow, we conducted experiments on MRIs and CTs containing early-stage ONFH. Our results indicate that the proposed framework is able to segment the anatomical structures of interest and accurately register the femurs and pelvises of the corresponding volumes, allowing for the visualization and localization of the ONFH in CT and generated X-rays, which could enable intra-operative visualization of the necrotic lesions for surgical procedures such as core decompression of the femur.
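The pairwise registration step can be sketched with SimpleITK (a toolkit choice assumed here, not named by the paper): rigidly register the segmented CT femur to its MRI counterpart with mutual information, then resample the MRI lesion labels into CT space. File names are hypothetical.

```python
# Hedged sketch: rigid MRI-to-CT registration of a segmented femur, then
# mapping MRI lesion labels into CT space with the resulting transform.
import SimpleITK as sitk

def register_rigid(fixed, moving):
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(init, inPlace=False)
    return reg.Execute(fixed, moving)

ct = sitk.ReadImage("ct_femur.nii.gz", sitk.sitkFloat32)     # hypothetical paths
mri = sitk.ReadImage("mri_femur.nii.gz", sitk.sitkFloat32)
tfm = register_rigid(ct, mri)
lesion_mri = sitk.ReadImage("lesion_label.nii.gz")
lesion_ct = sitk.Resample(lesion_mri, ct, tfm, sitk.sitkNearestNeighbor, 0)
```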

9.
Biomed Opt Express; 13(4): 2414-2430, 2022 Apr 01.
Article in English | MEDLINE | ID: mdl-35519277

ABSTRACT

The development and integration of intraoperative optical coherence tomography (iOCT) into modern operating rooms have motivated novel procedures directed at improving the outcome of ophthalmic surgeries. Although computer-assisted algorithms could further advance such interventions, the limited availability and accessibility of iOCT systems constrain the generation of dedicated data sets. This paper introduces a novel framework combining a virtual setup and deep learning algorithms to generate synthetic iOCT data in a simulated environment. The virtual setup reproduces the geometry of retinal layers extracted from real data and allows the integration of virtual microsurgical instrument models. Our scene rendering approach extracts information from the environment and considers iOCT-typical imaging artifacts to generate cross-sectional label maps, which in turn are used to synthesize iOCT B-scans via a generative adversarial network. In our experiments, we investigate the similarity between real and synthetic images, show the relevance of using the generated data for image-guided interventions, and demonstrate the potential of 3D iOCT data synthesis.
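The label-map stage can be illustrated with a toy rasterization: smooth per-column layer boundaries are converted into a cross-sectional label map of the kind that could condition an image-to-image generator. This sketch invents its geometry (`layer_boundaries`, `label_map` are illustrative names) and omits the GAN itself.

```python
# Toy rasterization of retinal layer boundaries into a cross-sectional label
# map, a plausible conditioning input for a pix2pix-style B-scan generator.
import numpy as np

def layer_boundaries(width, n_layers=4, base=60, gap=18, amp=6.0, seed=0):
    """Smooth per-column depth of each layer boundary (invented geometry)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 2 * np.pi, width)
    rows = []
    for k in range(n_layers):
        phase = rng.uniform(0, 2 * np.pi)
        rows.append(base + k * gap + amp * np.sin(x + phase))
    return np.array(rows)  # (n_layers, width)

def label_map(height, width, boundaries):
    rows = np.arange(height)[:, None]            # (H, 1), broadcast per column
    labels = np.zeros((height, width), np.uint8)
    for k, b in enumerate(boundaries, start=1):
        labels[rows >= b] = k                    # deeper boundary overwrites
    return labels

lm = label_map(256, 512, layer_boundaries(512))  # feed to a conditional GAN
```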

10.
Int J Comput Assist Radiol Surg; 17(5): 921-927, 2022 May.
Article in English | MEDLINE | ID: mdl-35347565

ABSTRACT

PURPOSE: Mixed reality (MR) for image-guided surgery may enable unobtrusive solutions for precision surgery. To display preoperative treatment plans at the correct physical position, it is essential to spatially align them with the patient intra-operatively. Accurate alignment is safety critical because it will guide treatment, but it cannot always be achieved for varied reasons. Effective visualization mechanisms that reveal misalignment are crucial to prevent adverse surgical outcomes and ensure safe execution. METHODS: We test the effectiveness of three MR visualization paradigms in revealing spatial misalignment: wireframe, silhouette, and heatmap, which encodes residual registration error. We conduct a user study among 12 participants and use an anthropomorphic phantom mimicking total shoulder arthroplasty. Participants wearing a Microsoft HoloLens 2 are presented with 36 randomly ordered spatial (mis)alignments of a virtual glenoid model overlaid on the phantom, each rendered using one of the three methods. Users choose whether to accept or reject the spatial alignment in every trial. Upon completion, participants report their perceived difficulty while using the visualization paradigms. RESULTS: Across all visualization paradigms, the ability of participants to reliably judge the accuracy of spatial alignment was moderate (58.33%). The three visualization paradigms showed comparable performance. However, the heatmap-based visualization resulted in significantly better detectability than random chance ([Formula: see text]). Despite heatmap enabling the most accurate decisions according to our measurements, wireframe was the most liked paradigm (50%), followed by silhouette (41.7%) and heatmap (8.3%). CONCLUSION: Our findings suggest that conventional mixed reality visualization paradigms are not sufficiently effective in enabling users to differentiate between accurate and inaccurate spatial alignment of virtual content to the environment.
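The heatmap paradigm can be sketched as a per-vertex mapping from residual registration error to color. The blue-to-red ramp, the 3 mm saturation, and the availability of ground-truth correspondences are assumptions for illustration, not the study's exact transfer function.

```python
# Hypothetical heatmap coloring: per-vertex residual error under the current
# (mis)alignment mapped to an RGB ramp on the overlaid virtual model.
import numpy as np

def error_colors(vertices, transform, ground_truth, max_err_mm=3.0):
    """Map per-vertex residual error to RGB (blue = 0 mm, red = max_err_mm)."""
    aligned = vertices @ transform[:3, :3].T + transform[:3, 3]
    err = np.linalg.norm(aligned - ground_truth, axis=1)
    t = np.clip(err / max_err_mm, 0.0, 1.0)[:, None]
    blue, red = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
    return (1.0 - t) * blue + t * red

verts = np.random.rand(1000, 3) * 50.0        # toy glenoid vertices (mm)
T = np.eye(4); T[:3, 3] = [1.5, 0.0, -0.5]    # simulated misalignment
colors = error_colors(verts, T, verts)        # uniform ~1.6 mm error -> orange
```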


Subjects
Augmented Reality, Computer-Assisted Surgery, Humans, Imaging Phantoms, Computer-Assisted Surgery/methods
11.
Article in English | MEDLINE | ID: mdl-37396671

ABSTRACT

Subretinal injection (SI) is an ophthalmic surgical procedure that allows for the direct injection of therapeutic substances into the subretinal space to treat vitreoretinal disorders. Although this treatment has grown in popularity, various factors contribute to its difficulty. These include the retina's fragile, nonregenerative tissue, as well as hand tremor and poor visual depth perception. In this context, the use of robotic devices may reduce hand tremor and facilitate gradual and controlled SI. For the robot to successfully move to the target area, it needs to understand the spatial relationship between the attached needle and the tissue. The development of optical coherence tomography (OCT) imaging has resulted in a substantial advancement in visualizing retinal structures at micron resolution. This paper introduces a novel foundation for an OCT-guided robotic steering framework that enables a surgeon to plan and select targets within the OCT volume, while the robot automatically executes the trajectories necessary to reach the selected targets. Our contribution consists of a novel combination of existing methods, creating an intraoperative OCT-robot registration pipeline. We combine straightforward affine transformation computations with robot kinematics and a deep neural network-determined tool-tip location in OCT. We evaluate our framework's capability in an open-sky procedure on a cadaveric pig eye and on an aluminum target board. Targeting the subretinal space of the pig eye produced encouraging results, with a mean Euclidean error of 23.8 µm.
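The affine part of such an OCT-robot registration can be sketched as a least-squares fit between corresponding tool-tip positions reported by the robot's kinematics and detected in the OCT volume. The sketch below simulates the point pairs; `fit_affine` and the sample values are illustrative, not the paper's code, and the detection network is omitted.

```python
# Least-squares affine registration: robot coordinates -> OCT volume
# coordinates, from corresponding tool-tip observations.
import numpy as np

def fit_affine(src, dst):
    """Fit A (3x4) so that dst_i ~= A @ [src_i; 1] in the least-squares sense."""
    X = np.hstack([src, np.ones((len(src), 1))])   # (n, 4) homogeneous sources
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solves X @ A = dst, (4, 3)
    return A.T                                     # (3, 4)

robot_pts = np.random.rand(12, 3) * 10.0           # tip poses from kinematics (mm)
A_true = np.hstack([np.diag([2.0, 2.0, 4.0]), [[1.0], [2.0], [0.5]]])
oct_pts = robot_pts @ A_true[:, :3].T + A_true[:, 3]  # simulated detections
A = fit_affine(robot_pts, oct_pts)

# Inverse mapping: send the robot to a surgeon-selected OCT target.
target_oct = np.array([5.0, 5.0, 20.0])
robot_goal = np.linalg.solve(A[:, :3], target_oct - A[:, 3])
```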

12.
J Imaging; 9(1), 2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36662102

ABSTRACT

Three decades after the first work on Medical Augmented Reality (MAR) was presented to the international community, and ten years after the deployment of the first MAR solutions into operating rooms, its exact definition, basic components, systematic design, and validation still lack a detailed discussion. This paper defines the basic components of any Augmented Reality (AR) solution and extends them to exemplary Medical Augmented Reality Systems (MARS). We use some of the original MARS applications developed at the Chair for Computer Aided Medical Procedures, deployed into medical schools for teaching anatomy and into operating rooms for telemedicine and surgical guidance over the last decades, to identify the corresponding basic components. In this regard, the paper does not discuss all past or existing solutions; it aims only to define the principal components, discuss the particular domain modeling for MAR and its design-development-validation process, and provide exemplary cases drawn from past in-house developments of such solutions.

13.
IEEE Trans Vis Comput Graph; 28(12): 4156-4171, 2022 12.
Article in English | MEDLINE | ID: mdl-33979287

ABSTRACT

Estimating the depth of virtual content has proven to be a challenging task in Augmented Reality (AR) applications. Existing studies have shown that the visual system makes use of multiple depth cues to infer the distance of objects, occlusion being one of the most important ones. The ability to generate appropriate occlusions becomes particularly important for AR applications that require the visualization of augmented objects placed below a real surface. Examples of these applications are medical scenarios in which the visualization of anatomical information needs to be observed within the patient's body. In this regard, existing works have proposed several focus and context (F+C) approaches to aid users in visualizing this content using Video See-Through (VST) Head-Mounted Displays (HMDs). However, the implementation of these approaches in Optical See-Through (OST) HMDs remains an open question due to the additive characteristics of the display technology. In this article, we design and conduct the first user study that compares depth estimation between VST and OST HMDs using existing in-situ visualization methods. Our results show that these visualizations cannot be directly transferred to OST displays without increasing error in depth perception tasks. To address this gap, we perform a structured decomposition of the visual properties of AR F+C methods to find best-performing combinations. We propose the use of chromatic shadows and hatching approaches transferred from computer graphics. In a second study, we perform a factorized analysis of these combinations, showing that varying the shading type and using colored shadows can lead to better depth estimation when using OST HMDs.
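The additive constraint that motivates chromatic shadows can be made concrete with a two-line model: on an OST display the rendered color adds to the real background, so a conventional black shadow produces no visible change, whereas a chromatic one does. This is a deliberately simplified illustration, ignoring display gamma and transparency.

```python
# Simplified additive combination on an optical see-through display: the
# display can only add light, never darken the real scene behind it.
import numpy as np

def perceived_ost(background_rgb, rendered_rgb):
    return np.clip(np.asarray(background_rgb) + np.asarray(rendered_rgb), 0, 1)

bg = np.array([0.6, 0.6, 0.6])                 # gray real-world surface
print(perceived_ost(bg, [0.0, 0.0, 0.0]))      # black shadow: no visible change
print(perceived_ost(bg, [0.0, 0.0, 0.3]))      # chromatic (bluish) shadow cue
```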


Subjects
Augmented Reality, Computer Graphics, Humans, User-Computer Interface, Equipment Design, Depth Perception